Results 1 - 20 of 37
1.
Jurnal Kejuruteraan ; 35(3):577-586, 2023.
Article in English | Web of Science | ID: covidwho-20241685

ABSTRACT

The impact of the COVID-19 pandemic is widespread, imposing limitations on healthcare services all over the world. Governments around the world have imposed restrictions that limit individual freedom and enforce social distancing to prevent the collapse of national healthcare systems. In this situation, Telerehabilitation (TR) is a promising way to deliver medical care and rehabilitation to patients remotely using telecommunications and the internet. Technological advancement has played a vital role in establishing TR to remotely assess a patient's physical condition and act accordingly during the pandemic. Likewise, Human Activity Recognition (HAR) is a key part of the recovery process for a wide variety of conditions, such as stroke, arthritis, brain injury, musculoskeletal injuries, and Parkinson's disease. Different HAR approaches can be used to monitor the health and activity levels of such patients effectively, and TR allows this to be done remotely. Therefore, in situations where conventional care is inadequate, combining telerehabilitation and HAR can be an effective means of providing treatment, and these opportunities have become patently apparent during the COVID-19 outbreak. However, this new era of technical progress has significant limitations, and this paper focuses on the challenges of telerehabilitation and the various human activity recognition approaches. This study will help researchers identify a suitable activity detection platform for a TR system during and after COVID-19, considering TR and HAR challenges.

2.
IEEE Transactions on Emerging Topics in Computing ; : 1-12, 2023.
Article in English | Scopus | ID: covidwho-20234808

ABSTRACT

Driven by the need, heightened by the ongoing COVID-19 pandemic, for innovative solutions in digital health and digital medicine, Wireless Body Area Networks (WBANs) are increasingly emerging as a central system for implementing well-being and healthcare solutions. By elaborating the data collected by a WBAN, advanced classification models can accurately extract health-related parameters, enabling, for example, fitness tracking, monitoring of vital signs, diagnosis and analysis of disease progression, and, in general, monitoring of human activities and behaviours. Unfortunately, commercially available WBANs present technological and economic drawbacks in terms of data fusion and labelling and the cost of the adopted devices. To overcome these issues, in this paper we present the architecture of a low-cost WBAN, built upon accessible off-the-shelf wearable devices and an Android application. We then report its technical evaluation concerning resource consumption. Finally, we demonstrate its versatility and accuracy in both medical and well-being application scenarios. Author

3.
57th Annual Conference on Information Sciences and Systems, CISS 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2320107

ABSTRACT

Fitness activities are beneficial to one's health and well-being. During the COVID-19 pandemic, demand for virtual trainers increased. Current systems can classify different exercises, and other systems provide feedback on a specific exercise. We propose a system that can simultaneously recognize a pose and provide real-time corrective feedback on the performed exercise with the least latency between recognition and correction. In the computer vision techniques implemented so far, occlusion and a lack of labeled data are the most significant obstacles to correctly detecting exercises and providing helpful feedback. Vector geometry is employed to calculate the angles between key points detected on the body in order to give the user corrective feedback and count the repetitions of each exercise. Three different architectures (GAN, Conv-LSTM, and LSTM-RNN) are evaluated for exercise recognition. A custom dataset of jumping jacks, squats, and lunges is used to train the models. The GAN achieved 92% testing accuracy but struggled in real-time performance. The LSTM-RNN architecture yielded 95% testing accuracy, and the ConvLSTM obtained 97% accuracy on real-time sequences. © 2023 IEEE.
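
The vector-geometry step described in this abstract (computing angles between detected body key points to drive corrective feedback) can be illustrated with a short sketch. The key-point coordinates and the angle threshold below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at key point b (in degrees) formed by key points a-b-c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical 2-D key points (hip, knee, ankle) from a pose estimator
hip, knee, ankle = (0.50, 0.40), (0.52, 0.60), (0.51, 0.80)
knee_angle = joint_angle(hip, knee, ankle)

# Example corrective rule for a squat: flag shallow repetitions
if knee_angle > 100:          # threshold chosen only for illustration
    print(f"Go lower: knee angle is {knee_angle:.0f} degrees")
```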

4.
1st International Conference on Futuristic Technologies, INCOFT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2314101

ABSTRACT

COVID-19 has shifted education towards online delivery, where instructors have a hard time keeping track of their students, and student performance falls considerably below expected learning levels due to a lack of attention. This work aids in the supervision of students during online classes. Artificial Intelligence (AI) models are being developed to better recognize student activities during online sessions. Many applications rely on determining an individual's mental state, and a quantitative measure of human activity while executing a task can help evaluate which subtask is the most challenging. Thus, the goal of this research is to create an algorithm that uses EEG data gathered with a Muse headset to measure the level of cognitive intelligence of students during online classes. The data collected by the Muse headset is multidimensional and is pre-processed before being fed into machine learning algorithms. The dataset's dimensionality is then reduced using feature selection. The models' precision and recall were calculated, and a confusion matrix was created. The Support Vector Machine produced the best results in the experiment. © 2022 IEEE.
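
A minimal sketch of the pipeline this abstract describes (feature selection followed by an SVM, evaluated with precision, recall, and a confusion matrix), using scikit-learn on synthetic stand-in data; the feature counts and labels are placeholders, not the Muse EEG data used in the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for pre-processed multidimensional EEG features
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))            # 300 windows, 40 features
y = rng.integers(0, 2, size=300)          # hypothetical attention labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature selection reduces dimensionality before the SVM, as in the abstract
model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))  # includes precision and recall
```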

5.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 84(6-B):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2293466

ABSTRACT

The number of Internet-of-Things (IoT) and edge devices has exploded in the last decade, providing new opportunities to sense and enable many applications that transform everyday people's lives. Wide-scale time series data collected through such devices, coupled with advances in learning technologies, can transform how people interact with their environment. However, as we enter the era of ubiquitous computing, there is a growing need for methods that are easy to use, computationally feasible, and require minimal human supervision to sense human activities by analyzing large-scale data. The goal of this research work is to propose data-driven techniques that focus on human activity sensing at different scales. The first part of the thesis focuses on human activity sensing at building scale for smart indoor environments. Towards that end, this work emphasizes general-purpose human activity sensing using ambient sensors for context-aware computing in smart environments. A deep neural network-based technique for sensing human-environment interaction is proposed, and experiments explored interpretability for different ambient sensors and their contribution to model performance to avoid data redundancy. Identifying the challenge of distribution shift in long-term activity sensing, the thesis next focuses on time series partitioning for unlabeled IoT sensor streams, an important step toward continuous human activity sensing. This work proposes Cadence, a generalized change point detection technique that detects change points through hypothesis testing by learning a data representation specifically with the segmentation objective. Experiments show that it is sample-efficient, unsupervised, and can robustly detect time-series events across different applications while needing only 9-93 seconds for training. The second part of the thesis focuses on human activity sensing at city scale using large-scale spatio-temporal data. A framework is introduced for sensing urban activity and policy compliance during the COVID-19 crisis using vision- and language-based sensing from street view images. Recognizing the challenges of street view image usage in urban sensing due to its large scale and distribution variance, a data-driven framework is proposed to evaluate the quality of information in urban-scale street view images based on quality attributes capturing the spatial, temporal, and content information present in the data. Our experiments show that such a framework can be useful for ranking, querying, and improving spatio-temporal data quality and usage in urban computing and activity sensing. We believe such techniques can be used to model our living patterns by analyzing large-scale data and to improve quality of life through applications such as home automation, energy optimization, and personalized healthcare. (PsycInfo Database Record (c) 2023 APA, all rights reserved)

6.
International Journal of Web Information Systems ; 2023.
Article in English | Scopus | ID: covidwho-2301623

ABSTRACT

Purpose: This paper aims to implement and extend the You Only Look Once (YOLO) algorithm for the detection of objects and activities. The advantage of YOLO is that it only runs a neural network once to detect the objects in an image, which is why it is powerful and fast. Cameras are found at many different crossroads and locations, and video processing of the feed through an object detection algorithm makes it possible to determine and track what is captured. Video surveillance has many applications, such as car tracking and tracking of people for crime prevention. This paper provides an exhaustive comparison between existing methods and the proposed method, which is found to have the highest object detection accuracy. Design/methodology/approach: The goal of this research is to develop a deep learning framework to automate the task of analyzing video footage through object detection in images. This framework processes video feeds or image frames from CCTV, a webcam or a DroidCam, which allows the camera in a mobile phone to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, is able to load each input image, process it and determine the categories of the matching objects that it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. This research implements and extends the YOLO algorithm for the detection of objects and activities. For video surveillance of traffic cameras, this has many applications, such as car tracking and person tracking for crime prevention. The implemented algorithm with the proposed methodology is compared against several prior methods from the literature and was found to have the highest accuracy for object detection and activity recognition. Findings: The results indicate that the proposed deep learning-based model can be implemented in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that can help save lives and protect people. Such a system could be installed at street and traffic cameras and in CCTV systems. When the system detects a car crash or a serious human or pedestrian fall with injury, it can be programmed to send automatic messages to the nearest local police, emergency and fire stations. When it detects a social distancing violation, it can be programmed to inform the local authorities or sound an alarm with a warning message to alert the public to keep their distance and avoid spreading aerosol particles that may transmit viruses, including the COVID-19 virus. Originality/value: This paper proposes an improved and augmented version of the YOLOv3 model, extended to perform activity recognition such as car crash detection, human fall detection and social distancing detection. The proposed model is based on a deep learning convolutional neural network used to detect objects in images and is trained on the widely used and publicly available Common Objects in Context data set. Being an extension of YOLO, the proposed model can be implemented for real-time object and activity recognition. It achieved higher accuracies for both large-scale and all-scale object detection, exceeded all the other compared methods in extending object detection to activity recognition, and produced the highest accuracy for car crash detection, fall detection and social distancing detection. © 2023, Emerald Publishing Limited.
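
As an illustration of how a social-distancing check can be layered on top of a detector's output, the sketch below measures pairwise distances between the centres of detected person boxes. The pixel-to-metre scale and the 2 m threshold are assumptions for illustration, not values from the paper.

```python
from itertools import combinations
import math

def box_centre(box):
    """Centre point of a detection box given as (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def distancing_violations(person_boxes, pixels_per_metre=150.0, min_distance_m=2.0):
    """Return index pairs of detected people closer than the minimum distance."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(person_boxes), 2):
        (ax, ay), (bx, by) = box_centre(a), box_centre(b)
        dist_m = math.hypot(ax - bx, ay - by) / pixels_per_metre
        if dist_m < min_distance_m:
            violations.append((i, j, round(dist_m, 2)))
    return violations

# Hypothetical person detections from a YOLO-style detector
boxes = [(100, 200, 180, 420), (220, 210, 300, 430), (700, 190, 780, 410)]
print(distancing_violations(boxes))   # only the first two people are within 2 m
```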

7.
Vis Comput ; : 1-21, 2022 Apr 13.
Article in English | MEDLINE | ID: covidwho-2301568

ABSTRACT

It is a nontrivial task to manage crowds in public places and recognize unacceptable behavior (such as violating social distancing norms during the COVID-19 pandemic). In such situations, people should avoid loitering (moving about in public places without apparent purpose) and maintain sufficient physical distance. In this study, a multi-object tracking algorithm is introduced to handle short-term object occlusion, detection errors, and identity switches. Objects are tracked frame by frame through bounding box detection and linear velocity estimation using a Kalman filter. Predicted tracks are kept alive for some time, handling missed detections and short-term object occlusion. ID switches (mainly due to crossing trajectories) are managed by explicitly considering the motion direction of the objects in real time. Furthermore, a novel approach is proposed to detect the unusual behavior of loitering, with a severity level, based on the tracking information. An adaptive algorithm is also proposed to detect physical distance violations based on the object dimensions over the entire length of the track. Finally, a mathematical approach to calculating actual physical distance is proposed that uses the height of a human as a reference object, which adheres to more specific distancing norms. The proposed approach is evaluated in traffic and pedestrian movement scenarios, and the experimental results demonstrate significant improvements.
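
The core tracking step described here, a constant-velocity Kalman filter that predicts each object's position every frame and keeps a track alive when a detection is missing, can be sketched as follows. The noise covariances, the initial state, and the example measurements are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object's (x, y) centre.
# State: [x, y, vx, vy]; all noise values here are illustrative defaults.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)      # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)       # we only observe position
Q = np.eye(4) * 0.01                            # process noise (assumed)
R = np.eye(2) * 1.0                             # measurement noise (assumed)

x = np.array([100.0, 200.0, 0.0, 0.0])          # initial state
P = np.eye(4) * 10.0                            # initial uncertainty

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

for z in [np.array([104.0, 203.0]), np.array([109.0, 207.0]), None]:
    x, P = predict(x, P)                        # keep the track alive every frame
    if z is not None:                           # missed detection: skip the update step
        x, P = update(x, P, z)
print(x.round(1))                               # estimated position and velocity
```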

8.
1st IEEE International Conference on Automation, Computing and Renewable Systems, ICACRS 2022 ; : 736-742, 2022.
Article in English | Scopus | ID: covidwho-2284161

ABSTRACT

"Human Activity Recognition" (HAR) refers to the ability to recognise human physical movements using wearable devices or IoT sensors. In this epidemic, the majority of patients, particularly the elderly and those who are extremely ill, are placedin isolation units. Because of the quick development of COVID, it's tough for caregivers or others to keepan eye on them when they're in the same room. People are fitted with wearable gadgets to monitor them and take required precautions, and IoT-based video capturing equipment is installed in the isolation ward. The existing systems are designed to record and categorise six common actions, including walking, jogging, going upstairs, downstairs, sitting, and standing, using multi-class classification algorithms. This paper discussed the advantages and limitations associated with developing the model using deep learning approaches on the live streaming data through sensors using different publicly available datasets. © 2022 IEEE

9.
IEEE Sensors Journal ; 23(2):969-976, 2023.
Article in English | Scopus | ID: covidwho-2244030

ABSTRACT

The recent SARS-CoV-2 virus, also known as COVID-19, badly affected the world's healthcare system due to limited medical resources for a large number of infected human beings. Quarantine helps in breaking the spread of the virus for such communicable diseases. This work proposes a nonwearable/contactless system for human location and activity recognition using ubiquitous wireless signals. The proposed method utilizes the channel state information (CSI) of the wireless signals recorded through a low-cost device for estimating the location and activity of the person under quarantine. We propose to utilize a Siamese architecture with combined one-dimensional convolutional neural networks (1-D-CNNs) and bi-directional long short-term memory (Bi-LSTM) networks. The proposed method provides high accuracy for the joint task and is validated on two real-world testbeds, first, using the designed low-cost CSI recording hardware, and second, on a public dataset for joint activity and location estimation. The human activity recognition (HAR) results outperform state-of-the-art machine and deep learning methods, and localization results are comparable with the existing methods. © 2001-2012 IEEE.
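
A rough PyTorch sketch of the kind of combined 1-D-CNN and Bi-LSTM branch this abstract describes; the layer sizes, number of CSI subcarriers, and the two output heads (activity and location) are assumptions, and the Siamese pairing and training loss are omitted.

```python
import torch
import torch.nn as nn

class CSIBranch(nn.Module):
    """One branch of a Siamese network: 1-D CNN over CSI subcarriers, then Bi-LSTM over time."""
    def __init__(self, n_subcarriers=52, n_activities=6, n_locations=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_subcarriers, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.activity_head = nn.Linear(2 * 64, n_activities)
        self.location_head = nn.Linear(2 * 64, n_locations)

    def forward(self, x):                 # x: (batch, time, subcarriers)
        h = self.cnn(x.transpose(1, 2))   # -> (batch, channels, time)
        h, _ = self.lstm(h.transpose(1, 2))
        h = h[:, -1]                      # last time step's bidirectional features
        return self.activity_head(h), self.location_head(h)

model = CSIBranch()
dummy_csi = torch.randn(8, 100, 52)       # 8 windows, 100 time steps, 52 subcarriers
activity_logits, location_logits = model(dummy_csi)
print(activity_logits.shape, location_logits.shape)   # (8, 6) and (8, 4)
```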

10.
IEEE Sensors Journal ; 23(2):989-996, 2023.
Article in English | Scopus | ID: covidwho-2242146

ABSTRACT

The provision of physical healthcare services during the isolation phase is one of the major challenges associated with the current COVID-19 pandemic. Smart healthcare services face a major challenge in the form of human behavior, which is based on human activities, complex patterns, and subjective nature. Although the advancement in portable sensors and artificial intelligence has led to unobtrusive activity recognition systems, very few studies deal with behavior tracking for addressing the problem of variability and behavior dynamics. In this regard, we propose the fusion of PRocess mining and Paravector Tensor (PROMPT)-based physical health monitoring framework that not only tracks subjective human behavior, but also deals with the intensity variations associated with inertial measurement units. Our experimental analysis of a publicly available dataset shows that the proposed method achieves 14.56% better accuracy in comparison to existing works. We also propose a generalized framework for healthcare applications using wearable sensors and the PROMPT method for its triage with physical health monitoring systems in the real world. © 2001-2012 IEEE.

11.
EURASIP J Adv Signal Process ; 2023(1): 18, 2023.
Article in English | MEDLINE | ID: covidwho-2246203

ABSTRACT

A large number of epidemics, including COVID-19 and SARS, have swept the world quickly and claimed many precious lives. Due to the concealment and rapid spread of the virus, it is difficult to track down individuals with mild or asymptomatic symptoms using limited human resources. Building a low-cost, real-time epidemic early warning system to identify individuals who have been in contact with infected individuals, and to determine whether they need to be quarantined, is an effective means of mitigating the spread of an epidemic. In this paper, we propose a smartphone-based zero-effort epidemic warning method for mitigating epidemic propagation. First, we recognize epidemic-related voice activity using a hierarchical attention mechanism and a temporal convolutional network. Subsequently, we estimate the social distance between users through the sensors built into the smartphone. Furthermore, we combine Wi-Fi network logs and social distance to judge comprehensively whether there is spatiotemporal contact between users and to determine the duration of contact. Finally, we estimate infection risk based on epidemic-related vocal activity, social distance, and contact time. We conduct a large number of well-designed experiments in typical scenarios to fully verify the proposed method. The proposed method does not rely on any additional infrastructure or historical training data, which is conducive to integration with epidemic prevention and control systems and to large-scale application.
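
The final step combines epidemic-related vocal activity, social distance, and contact duration into an infection risk estimate. The weighting below is a purely illustrative heuristic, not the scoring used in the paper.

```python
def infection_risk(vocal_events, avg_distance_m, contact_minutes):
    """Toy risk score in [0, 1] combining the three cues named in the abstract.

    vocal_events     -- count of detected epidemic-related vocal events
    avg_distance_m   -- estimated average distance between the two users
    contact_minutes  -- duration of spatiotemporal contact inferred from Wi-Fi logs
    All weights and saturation caps below are illustrative assumptions.
    """
    vocal_factor = min(vocal_events / 10.0, 1.0)            # saturate at 10 events
    distance_factor = max(0.0, 1.0 - avg_distance_m / 2.0)  # closer than 2 m raises risk
    duration_factor = min(contact_minutes / 15.0, 1.0)      # saturate at 15 minutes
    return round(0.4 * vocal_factor + 0.3 * distance_factor + 0.3 * duration_factor, 2)

print(infection_risk(vocal_events=4, avg_distance_m=0.8, contact_minutes=20))  # 0.64
```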

12.
11th IEEE Global Conference on Consumer Electronics, GCCE 2022 ; : 172-176, 2022.
Article in English | Scopus | ID: covidwho-2236148

ABSTRACT

Recording indoor human activities, such as room occupancy, is important for controlling the COVID-19 pandemic. Logs of human activities can be recorded using wearable devices, provided that the action of entering or exiting a room can be recognized from the operation of doors. However, relatively few studies on human activity recognition have considered the detection of door operations using wearable devices. In this study, we propose a new deep learning-based technique to detect door operations. We developed a smartwatch application to collect and label multiple forms of data. To evaluate the proposed approach, we conducted an experiment in which we used the application to collect data for four door operations (two types of doors, each with the two activities of entering and exiting). The collected data were then used to train deep learning models. The experimental results show that the average F1 scores ranged from 0.787 to 0.909 when acceleration and angular velocity data were used, which suggests that the proposed technique can detect door operations sufficiently well. © 2022 IEEE.

13.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies ; 6(4), 2023.
Article in English | Scopus | ID: covidwho-2214058

ABSTRACT

A user often needs training and guidance while performing several daily life procedures, e.g., cooking, setting up a new appliance, or doing a COVID test. Watch-based human activity recognition (HAR) can track users' actions during these procedures. However, out of the box, state-of-the-art HAR struggles with noisy data and the less-expressive actions that are often part of daily life tasks. This paper proposes PrISM-Tracker, a procedure-tracking framework that augments existing HAR models with (1) a graph-based procedure representation and (2) a user-interaction module to handle model uncertainty. Specifically, PrISM-Tracker extends the Viterbi algorithm to update state probabilities based on time-series HAR outputs by leveraging the graph representation, which embeds time information as a prior. Moreover, the model identifies moments or classes of uncertainty and asks the user for guidance to improve tracking accuracy. We tested PrISM-Tracker on two procedures: latte-making in an engineering lab study and wound care for skin cancer patients at a clinic. The results showed the effectiveness of the proposed algorithm utilizing transition graphs in tracking steps and the efficacy of using simulated human input to enhance performance. This work is a first step toward human-in-the-loop intelligent systems for guiding users while performing new and complicated procedural tasks. © 2023 Owner/Author.
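
A compact sketch of the kind of Viterbi-style decoding the abstract refers to, where a transition graph derived from the procedure acts as a prior over step-to-step transitions; the transition matrix and per-frame HAR probabilities below are made up for illustration and are not from PrISM-Tracker itself.

```python
import numpy as np

def viterbi(obs_probs, trans, init):
    """Most likely step sequence given per-frame HAR probabilities and a transition prior.

    obs_probs: (T, S) per-frame probabilities from the HAR model
    trans:     (S, S) transition prior derived from the procedure graph
    init:      (S,)   initial step distribution
    """
    T, S = obs_probs.shape
    log_delta = np.log(init) + np.log(obs_probs[0])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(trans)          # (prev, next)
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(obs_probs[t])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Three hypothetical procedure steps; the graph strongly favours moving forward
trans = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]]) + 1e-9                    # avoid log(0)
obs = np.array([[0.8, 0.1, 0.1],
                [0.4, 0.5, 0.1],
                [0.2, 0.3, 0.5],
                [0.1, 0.2, 0.7]])
print(viterbi(obs, trans, init=np.array([0.9, 0.05, 0.05])))  # [0, 1, 2, 2]
```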

14.
2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics, RI2C 2022 ; : 101-105, 2022.
Article in English | Scopus | ID: covidwho-2136466

ABSTRACT

People have been encouraged to wear masks and avoid touching their faces in public as part of the new measures to prevent the spread of coronavirus disease 2019 (COVID-19). During the COVID-19 epidemic, few studies have examined the effect of everyday living on the frequency of face-touching activity. To develop a face-touching avoidance system, deep learning algorithms have been proposed and have demonstrated impressive performance. However, an important drawback of deep learning is its extensive dependence on hyperparameters. The results of deep learning algorithms may vary depending on hyperparameters such as the size of the filters, the number of filters, the batch size, the number of epochs, and the training optimization technique used. In this paper, we present an effective approach to hyperparameter tuning of convolutional neural networks (CNNs) for efficiently recognizing face-touching activities based on accelerometer data. Two hyperparameter tuning methods (grid search and Bayesian optimization) were evaluated in order to construct a high-performing CNN. The experimental results show that Bayesian optimization can provide suitable hyperparameters for CNNs for face-touching recognition, with a highest accuracy of 96.61%. © 2022 IEEE.
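
To make the two tuning strategies concrete, the sketch below shows the grid-search half around a placeholder train_and_evaluate function; the hyperparameter ranges are invented, train_and_evaluate is a hypothetical stand-in for the real CNN training routine, and a Bayesian-optimization counterpart would replace the exhaustive loop with a surrogate-model-driven search.

```python
from itertools import product

def train_and_evaluate(filters, kernel_size, batch_size, optimizer):
    """Placeholder: train a CNN on accelerometer windows and return validation accuracy.

    Purely illustrative scoring so the sketch runs end to end; the real routine
    would fit and evaluate the model described in the paper.
    """
    return 0.90 + 0.01 * (filters == 64) + 0.02 * (optimizer == "adam") - 0.01 * (kernel_size == 7)

search_space = {
    "filters": [32, 64],          # number of convolution filters (assumed range)
    "kernel_size": [3, 5, 7],     # filter size (assumed range)
    "batch_size": [32, 64],
    "optimizer": ["adam", "sgd"],
}

best_score, best_params = -1.0, None
for values in product(*search_space.values()):
    params = dict(zip(search_space.keys(), values))
    score = train_and_evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, round(best_score, 3))
```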

15.
EURASIP J Adv Signal Process ; 2022(1): 103, 2022.
Article in English | MEDLINE | ID: covidwho-2089238

ABSTRACT

Delivering health care at home has emerged as a key advancement to reduce healthcare costs and infection risks, as during the SARS-CoV-2 pandemic. In particular, in motor training applications, wearable and portable devices can be employed for movement recognition and monitoring of the associated brain signals. This is one of the contexts where it is essential to minimize the monitoring setup and the amount of data to collect, process, and share. In this paper, we address this challenge for a monitoring system that includes high-dimensional EEG and EMG data for the classification of a specific type of hand movement. We fuse EEG and EMG into the magnitude squared coherence (MSC) signal, from which we extract features using different algorithms (one from the authors) to solve binary classification problems. Finally, we propose a mapping-and-aggregation strategy to increase the interpretability of the machine learning results. The proposed approach provides very low misclassification errors (<0.1), with very few and stable MSC features (<10% of the initial set of available features). Furthermore, we identified a common pattern across algorithms and classification problems, i.e., the activation of the centro-parietal brain areas and arm muscles in the 8-80 Hz frequency band, in line with previous literature. Thus, this study represents a step toward minimizing a reliable EEG-EMG setup to enable gesture recognition.
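
The EEG-EMG fusion step relies on the magnitude squared coherence, which scipy computes directly; the synthetic signals, the sampling rate, and the windowing below are stand-ins for the real recordings, shown only to illustrate the quantity being fused.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic stand-ins: a shared 20 Hz component makes the two channels coherent there
shared = np.sin(2 * np.pi * 20 * t)
eeg = shared + 0.5 * rng.normal(size=t.size)
emg = shared + 0.5 * rng.normal(size=t.size)

# Magnitude squared coherence (MSC) between the EEG and EMG channels
f, msc = coherence(eeg, emg, fs=fs, nperseg=256)

# Keep only the 8-80 Hz band mentioned in the abstract as candidate features
band = (f >= 8) & (f <= 80)
print(f[band][msc[band].argmax()])          # frequency of peak coherence, ~20 Hz here
```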

16.
21st IFIP WG 6.11 Conference on e-Business, e-Services, and e-Society, I3E 2022 ; 13454 LNCS:148-163, 2022.
Article in English | Scopus | ID: covidwho-2048111

ABSTRACT

The accessibility of datasets that capture the performance of Activities of Daily Living is limited by the difficulties in setting up test beds, challenges that the COVID-19 pandemic has recently compounded. Smart Environments employed as test beds consist of sensors and applications designed to create a comfortable and safe environment for their inhabitants. Despite the increasing number of Smart Environments, access to these spaces for researchers has become even more challenging amidst a pandemic. Computing power has enabled researchers to generate virtual Smart Environments with less overhead and complexity. This article proposes an Extended Smart Environment Simulator (ESESIM) with multiple inhabitants that can be utilised for dataset generation. The proposed simulation tool provides a virtual space with multiple script-regulated inhabitants. While the inhabitants interact with the Smart Environment, sensor readings are recorded and stored in a dataset. The virtual space developed in this study generated synthetic datasets that can be employed for Human Activity Recognition in machine learning. This study also evaluated two deep learning models and performance mechanisms for recognising four activities of daily living, namely personal hygiene, dressing, cooking and sleeping, on the SESIM dataset. Findings from this study indicate that simulation can be used as a tool for generating human activity datasets. © 2022, IFIP International Federation for Information Processing.

17.
IEEE Sensors Journal ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2018957

ABSTRACT

The recent SARS-CoV-2 virus, also known as COVID-19, badly affected the world's healthcare system due to limited medical resources for a large number of infected human beings. Quarantine helps in breaking the spread of the virus for such communicable diseases. This work proposes a non-wearable/contactless system for human location and activity recognition using ubiquitous wireless signals. The proposed method utilizes the Channel State Information (CSI) of the wireless signals recorded through a low-cost device for estimating the location and activity of the person under quarantine. We propose to utilize a Siamese architecture with combined one-dimensional Convolutional Neural Networks (1D-CNN) and bi-directional long short-term memory (Bi-LSTM) networks. The proposed method provides high accuracy for the joint task and is validated on two real-world testbeds: first, using the designed low-cost CSI recording hardware, and second, on a public dataset for joint activity and location estimation. The HAR results outperform state-of-the-art machine and deep learning methods, and the localization results are comparable with existing methods. IEEE

18.
IEEE Sensors Journal ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2018954

ABSTRACT

The provision of physical healthcare services during the isolation phase is one of the major challenges associated with the current COVID-19 pandemic. Smart healthcare services face a major challenge in the form of human behavior, which is based on human activities, complex patterns, and subjective nature. Although the advancement in portable sensors and artificial intelligence has led to unobtrusive activity recognition systems, very few studies deal with behavior tracking for addressing the problem of variability and behavior dynamics. In this regard, we propose the fusion of PRocess mining and Paravector Tensor (PROMPT)-based physical health monitoring framework that not only tracks subjective human behavior, but also deals with the intensity variations associated with inertial measurement units. Our experimental analysis of a publicly available dataset shows that the proposed method achieves 14.56% better accuracy in comparison to existing works. We also propose a generalized framework for healthcare applications using wearable sensors and the PROMPT method for its triage with physical health monitoring systems in the real world. IEEE

19.
19th IEEE Annual Consumer Communications and Networking Conference, CCNC 2022 ; : 393-398, 2022.
Article in English | Scopus | ID: covidwho-1992580

ABSTRACT

The COVID-19 pandemic has presented social challenges in establishing a new normal lifestyle in our daily lives. The goal of this paper is to enable easy and low-cost monitoring of cleaning activity to keep environments clean and prevent infection. Although human activity recognition has been a hot research topic in pervasive computing, existing schemes have not been optimized for monitoring cleaning activities. To address this issue, this paper provides an initial concept and preliminary experimental results for cleaning activity recognition using accelerometer data and RFID tags. In the proposed scheme, machine learning and short-range wireless communication are employed to recognize the time and place of wiping, which is taken as an example of a cleaning activity because it is important for avoiding infection in shared places. This paper reports evaluation results on recognition accuracy using a proof-of-concept (PoC) implementation to clarify the required sampling rate and time-window size for further experiments. A real-time feedback system is also implemented to provide the monitoring results to users. The proposed scheme contributes to the efficient monitoring of cleaning activities in the new normal era. © 2022 IEEE.

20.
17th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2021, in conjunction with the 14th International Conference on Frontiers of Information Technology, Applications and Tools, FITAT 2021 ; 278:343-351, 2022.
Article in English | Scopus | ID: covidwho-1971599

ABSTRACT

There are one million fitness club members in Taiwan. During the COVID-19 epidemic, they cannot enjoy fitness activities under the guidance of coaches as usual. Individual guidance from digital coaches, enabling safe exercise anytime and anywhere, appears to be the future trend. Therefore, this paper proposes an approach to digital coaching for aerobic boxing based on high-precision real-time motion recognition. The boxer wears two bracelets to interact with the coach. Through the inertial sensors and high-efficiency micro-controller embedded in the bracelets, the proposed approach meets the need to instantly identify the correctness of actions when exercising alone. To evaluate the performance of the proposed approach, two activities, swing boxing and hook boxing, are verified. Numeric results show recognition accuracies of 93% and 85.71% for swing boxing and hook boxing, respectively. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
